Contact Name
Rahmat Hidayat
Contact Email
mr.rahmat@gmail.com
Phone
-
Journal Mail Official
rahmat@pnp.ac.id
Editorial Address
-
Location
Kota Padang,
Sumatera Barat
INDONESIA
JOIV : International Journal on Informatics Visualization
ISSN : 2549-9610     EISSN : 2549-9904     DOI : -
Core Subject : Science
JOIV : International Journal on Informatics Visualization is an international peer-reviewed journal dedicated to the interchange of high-quality research results in all aspects of Computer Science, Computer Engineering, Information Technology, and Visualization. The journal publishes state-of-the-art papers on fundamental theory, experiments, and simulation, as well as applications, with a systematically proposed method, a sufficient review of previous work, an expanded discussion, and a concise conclusion. As part of its commitment to the advancement of science and technology, JOIV follows an open-access policy that makes published articles freely available online without any subscription.
Arjuna Subject : -
Articles: 20 Documents
Search results for issue "Vol 6, No 1-2 (2022): Data Visualization, Modeling, and Representation": 20 Documents
Design of a Big-data-Based Decision Support System for Rational Cultural Policy Establishment Youngseok Lee; Gimoon Cho; Jungwon Cho
JOIV : International Journal on Informatics Visualization Vol 6, No 1-2 (2022): Data Visualization, Modeling, and Representation
Publisher : Politeknik Negeri Padang

DOI: 10.30630/joiv.6.1-2.937

Abstract

This paper proposes a technique for designing a big-data-based decision support system for rational cultural policy decisions. To identify a rational cultural policy, comparable cultural-policy indices must be extracted, and factors must be analyzed and processed in terms of both cultural supply and cultural consumption. The analyzed and processed supply and consumption indices become the basic input data for calculating additional cultural indices that measure the cultural level of each region. Regional cultural indices on the supply side are treated as independent variables and indices on the demand side as target variables, and two corresponding types of regression models are established. Using the eXtreme Gradient Boosting (XGBoost) and Light Gradient Boosting Machine (LightGBM) algorithms, representative algorithms for calculating cultural indicators, we constructed and analyzed a model of the proposed system. The model is designed to predict the demand index from the regional cultural supply index, and it was confirmed that the demand side can be changed based on supply-side items, so the proposed technique can support decision-making. Given the complexity of the policy environment of modern society, mixing various policy tools targeting multiple functions is accepted as a common basis for policy design, but institutional arrangements are needed to reflect the results of data analyses in budget decision-making. This will make it possible to produce evidence of effectiveness and to suggest appropriate, rational policies and decisions.
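Purely as an illustration (the paper's code and data are not published here), a minimal Python sketch of the kind of supply-to-demand regression described above, using XGBoost and LightGBM on hypothetical regional indices:

    # Illustrative sketch only: hypothetical features and target, not the authors' code.
    import numpy as np
    from lightgbm import LGBMRegressor
    from xgboost import XGBRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_absolute_percentage_error

    rng = np.random.default_rng(0)
    # Hypothetical regional cultural-supply indices (independent variables),
    # e.g., facilities, programs, budget, and staff per region.
    X = rng.random((200, 4))
    # Hypothetical cultural-demand index (target variable).
    y = X @ np.array([0.5, 0.2, 0.2, 0.1]) + rng.normal(0, 0.05, 200)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

    for model in (XGBRegressor(n_estimators=200), LGBMRegressor(n_estimators=200)):
        model.fit(X_tr, y_tr)
        pred = model.predict(X_te)
        print(type(model).__name__, mean_absolute_percentage_error(y_te, pred))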
Customization of Cost Allocation Monitoring Report for Improving Activity-Based Costing Process Risma Nur Damayanti; Muhardi Saputra; Tien Fabrianti Kusumasari
JOIV : International Journal on Informatics Visualization Vol 6, No 1-2 (2022): Data Visualization, Modeling, and Representation
Publisher : Politeknik Negeri Padang

DOI: 10.30630/joiv.6.1-2.927

Abstract

In an age of global competition, accurate cost measurement is essential for every company. A more accurate allocation of costs to final outputs reveals the potential impact a company's decisions have on costs. Activity-based costing (ABC) is a technique for allocating organizational costs to the activities that use the organization's resources and then tracing the costs of these activities to the products, consumers, or distribution channels that generate profits or losses for the business. The large number of cost allocations in business processes makes it difficult for companies to determine how much cost has been allocated, especially when the data to be processed are large. To overcome this problem, cost mapping of the business process, from resource to cost center, is required so that the allocated amounts can be compared. This research discusses the implementation of a monitoring report using ALV (ABAP List Viewer) customization at XYZ Ltd. The report was created using an iterative and incremental model approach. Simulation results show a 50% reduction in the time needed to run the customized monitoring report, and it takes only one step to generate reports and analyze data. The results of this research are expected to serve as a study that provides the right solution for checking cost allocation in ABC, so that each business process (resource, activity, and cost object) can be objectively monitored and analyzed and the decision-making process supported.
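The report itself is an SAP ALV customization; as a language-neutral illustration of the two-step tracing that activity-based costing performs (resources to activities, then activities to cost objects), here is a hedged Python sketch with invented figures:

    # Hypothetical figures; illustrates the ABC allocation logic only,
    # not XYZ Ltd.'s SAP report.
    resources = {"electricity": 10_000.0, "labor": 50_000.0}

    # Share of each resource consumed by each activity (rows sum to 1).
    resource_to_activity = {
        "electricity": {"machining": 0.7, "assembly": 0.3},
        "labor":       {"machining": 0.4, "assembly": 0.6},
    }

    # Activity-driver quantities per cost object (e.g., machine/labor hours).
    activity_drivers = {
        "machining": {"product_A": 120, "product_B": 80},
        "assembly":  {"product_A": 60,  "product_B": 140},
    }

    # Step 1: allocate resource costs to activities.
    activity_cost = {}
    for res, total in resources.items():
        for act, share in resource_to_activity[res].items():
            activity_cost[act] = activity_cost.get(act, 0.0) + total * share

    # Step 2: trace activity costs to cost objects in proportion to driver usage.
    object_cost = {}
    for act, drivers in activity_drivers.items():
        total_driver = sum(drivers.values())
        for obj, qty in drivers.items():
            object_cost[obj] = object_cost.get(obj, 0.0) + activity_cost[act] * qty / total_driver

    print(activity_cost)  # allocated cost per activity
    print(object_cost)    # final cost per product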
The Use of Image Processing and Sensor in Tomato Sorting Machine by Color, Size, and Weight Marlindia Ike Sari; Rizal Fajar; Tedi Gunawan; Rini Handayani
JOIV : International Journal on Informatics Visualization Vol 6, No 1-2 (2022): Data Visualization, Modeling, and Representation
Publisher : Politeknik Negeri Padang

DOI: 10.30630/joiv.6.1-2.944

Abstract

Tomatoes are a popular vegetable in Indonesia, where production increases every year in line with market demand. This large production volume requires proper post-harvest handling in terms of both quality and time. Sorting and grading are the first and foremost steps of the post-harvest process for tomatoes; sorting can be conducted by color and adjusted to the target market. Automating the sorting and grading process saves time and resources. This research proposes a system that sorts tomatoes by color, size, and weight. Tomatoes are sorted into red, yellow, and green classes. Color and size were detected with image processing using the OpenCV library: color detection used the HSV values for red, yellow, and green, while size was measured by finding the outermost points of the detected object, both vertically and horizontally. Weight was measured with a weight sensor. The system was implemented as a prototype with a webcam, an Arduino, a conveyor, and motors; the final part is a storage box that collects tomatoes by grade. The implementation achieved 100% accuracy for color detection and 95% accuracy for weight measurement. However, dimensional measurement still needs development: in this research, it reached only 5% accuracy. The proposed system, in both its software and hardware design, can be used as an inline tomato sorting tool.
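A minimal OpenCV sketch of the color-classification and size-measurement steps described above; the HSV thresholds are assumptions for illustration, not the paper's calibrated values:

    # Illustrative sketch; HSV thresholds are assumed, not the paper's values.
    import cv2
    import numpy as np

    def classify_and_measure(bgr_image):
        hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
        # Red wraps around hue 0 on OpenCV's 0-179 hue scale, so use two ranges.
        ranges = {
            "red":    [((0, 80, 80), (10, 255, 255)), ((170, 80, 80), (179, 255, 255))],
            "yellow": [((20, 80, 80), (35, 255, 255))],
            "green":  [((36, 80, 80), (85, 255, 255))],
        }
        best_color, best_mask, best_count = None, None, 0
        for color, bounds in ranges.items():
            mask = np.zeros(hsv.shape[:2], dtype=np.uint8)
            for lo, hi in bounds:
                mask |= cv2.inRange(hsv, np.array(lo), np.array(hi))
            count = int(cv2.countNonZero(mask))
            if count > best_count:
                best_color, best_mask, best_count = color, mask, count
        # Size: outermost points of the detected object, vertically and horizontally.
        contours, _ = cv2.findContours(best_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
        return best_color, (w, h)  # width/height in pixels; calibrate to mm separately

    img = cv2.imread("tomato.jpg")  # hypothetical frame captured from the webcam
    print(classify_and_measure(img))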
An Assessment Algorithm for Indoor Evacuation Model Khyrina Airin Fariza Abu Samah; Amir Haikal Abdul Halim; Zaidah Ibrahim
JOIV : International Journal on Informatics Visualization Vol 6, No 1-2 (2022): Data Visualization, Modeling, and Representation
Publisher : Politeknik Negeri Padang

DOI: 10.30630/joiv.6.1-2.933

Abstract

The number of public buildings has increased significantly with economic growth and population advancement. The complexity of indoor layouts and the large number of people involved make indoor evacuation wayfinding to the nearest exit more challenging during emergencies such as fire. To address this, every building must follow the standard evacuation preparedness required by the Uniform Building By-Law (UBBL). Researchers have also developed evacuation models to help evacuees leave a building safely. However, building owners do not know which evacuation model is suitable for a chosen high-rise building. Two problems were identified in choosing a suitable evacuation model during the decision-making process. First, the many developed evacuation models focus on studying different features of evacuation behavior and evacuation time. Second, validation and comparison of evacuation models is a missing step before a suitable model is applied; both procedures have been carried out independently, without any standard assessment that encapsulates the critical incident features of indoor evacuation and virtual spatial elements. Therefore, this research proposes an indoor evacuation assessment algorithm to solve the problem. The assessment algorithm refers to the elements developed in our previous study. We determined attributes, executed simulations, and evaluated cluster performance using the developed framework. The outcome can help building owners assess which existing evacuation model is best to implement in the chosen building.
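The abstract does not detail the assessment algorithm, so the following is only a hypothetical illustration of the "evaluate cluster performance" step, scoring a clustering of invented evacuation-attribute vectors with a silhouette coefficient:

    # Hypothetical illustration only; attributes and data are invented.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.metrics import silhouette_score

    rng = np.random.default_rng(1)
    # Each row: one simulated run of an evacuation model, described by assumed
    # attributes such as total evacuation time, mean congestion, exit utilization.
    attributes = rng.random((60, 3))

    labels = KMeans(n_clusters=3, n_init=10, random_state=1).fit_predict(attributes)
    print("silhouette:", silhouette_score(attributes, labels))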
Development of Programming Log Collection System Requirements Using Interface Requirement Analysis Techniques Huijae Park; Wongyu Lee; Jamee Kim
JOIV : International Journal on Informatics Visualization Vol 6, No 1-2 (2022): Data Visualization, Modeling, and Representation
Publisher : Politeknik Negeri Padang

DOI: 10.30630/joiv.6.1-2.938

Abstract

As software affects every industry, companies are increasingly trying to recruit talent. Despite this interest and investment, it is difficult to find people who combine technical expertise with specialization in a specific field. As a result, companies have begun to look for candidates with strong problem-solving skills who can compensate for a lack of domain expertise. Countries worldwide, feeling the need to develop competitive talent, have begun to change how education nurtures it; in particular, the expansion of programming education, which cultivates problem-solving ability, has begun to show results. However, programming education differs from existing education, and many learners struggle at the introductory stage because of the difficulty of debugging. To analyze the difficulties of introductory learners and support their learning, a system is required that can collect data from the programming process and analyze behavior types. There are several methods for deriving system requirements; the interface requirements analysis method was selected in this study. We determined how the system should process data by deriving the types of data that the system administrator wants to collect. This study lays the foundation for a system that can analyze the programming process of introductory learners by deriving the functional and non-functional requirements of the data collection system through interface requirements analysis.
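The paper derives requirements rather than an implementation; as a purely hypothetical sketch, one collected programming-log record might look like this (all field names are invented):

    # Invented schema for illustration; not the system described in the paper.
    from dataclasses import dataclass, asdict
    from datetime import datetime, timezone
    import json

    @dataclass
    class ProgrammingLogEvent:
        learner_id: str        # anonymized learner identifier
        timestamp: str         # ISO-8601 event time
        event_type: str        # e.g., "edit", "run", "compile_error", "debug"
        source_snapshot: str   # code state at the time of the event
        error_message: str     # compiler/runtime output, empty if none

    event = ProgrammingLogEvent(
        learner_id="stu-042",
        timestamp=datetime.now(timezone.utc).isoformat(),
        event_type="compile_error",
        source_snapshot='print("hello"',
        error_message="SyntaxError: unexpected EOF while parsing",
    )
    print(json.dumps(asdict(event), indent=2))  # what a collector might transmit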
Comparison of Apache SparkSQL and Oracle Performance: Case Study of Data Cleansing Process Ilma Nur Hidayati; Tien Fabrianti Kusumasari; Faqih Hamami
JOIV : International Journal on Informatics Visualization Vol 6, No 1-2 (2022): Data Visualization, Modeling, and Representation
Publisher : Politeknik Negeri Padang

DOI: 10.30630/joiv.6.1-2.928

Abstract

A high-quality dataset is a valuable asset for a company. Data can be processed into information that helps companies improve decision-making. However, as data accumulates over time, data quality tends to decrease, so good data management is important to keep quality up to company standards. One possible effort is data cleansing, which cleans data of errors, inaccuracies, duplication, format discrepancies, and so on. Apache Spark is an engine that can analyze large amounts of data, while Oracle Database is a database management system used to manage databases. Each is reliable in its own domain, and both can be used to analyze data with SQL. This study compared Spark and Oracle performance based on query processing time. Both were tested on queries used to perform data cleansing on a dataset of millions of rows. The research focuses on quantifying Spark's and Oracle's performance through quantitative analysis. The results show differences in query processing time between the two tools: Apache Spark rated better because its query processing time was relatively faster than Oracle Database's. It can be concluded that Oracle is more reliable for storing complex data models than for analyzing large data. For future research, it is suggested that other comparison aspects, such as memory and CPU usage, be added. Researchers could also consider using query optimization techniques to enrich the query experiments.
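A minimal PySpark sketch of a timed cleansing query of the kind compared in the study; the query, file path, and column names are assumptions, not the paper's benchmark:

    # Illustrative timing harness; query and file path are assumptions.
    import time
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("cleansing-benchmark").getOrCreate()
    df = spark.read.csv("records.csv", header=True, inferSchema=True)  # hypothetical file
    df.createOrReplaceTempView("records")

    # Example cleansing query: drop exact duplicates and trim a text column.
    query = """
        SELECT DISTINCT id, TRIM(name) AS name, email
        FROM records
        WHERE email IS NOT NULL
    """

    start = time.perf_counter()
    cleaned = spark.sql(query)
    n = cleaned.count()        # an action forces execution (Spark is lazy)
    elapsed = time.perf_counter() - start
    print(f"{n} clean rows in {elapsed:.2f}s")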
Implementation of Support Vector Regression for Polkadot Cryptocurrency Price Prediction Deny Haryadi; Arif Rahman Hakim; Dewi Marini Umi Atmaja; Syifa Nurgaida Yutia
JOIV : International Journal on Informatics Visualization Vol 6, No 1-2 (2022): Data Visualization, Modeling, and Representation
Publisher : Politeknik Negeri Padang

DOI: 10.30630/joiv.6.1-2.945

Abstract

Cryptocurrency investment is an investment instrument that carries high risk but also offers greater potential returns than other instruments. To make a large profit, investors need to analyze cryptocurrency investments and predict the price of the cryptocurrency to be purchased. The highly volatile movement of cryptocurrency prices makes such prediction difficult. Data mining is the process of extracting information from large data sets by collecting data and analyzing the historical relationship patterns within them. Support Vector Regression (SVR) can produce accurate cryptocurrency price predictions and is inherently resistant to overfitting. Polkadot is one of the cryptocurrencies often used as an investment instrument. On Polkadot daily closing-price data, SVR achieved good predictive accuracy: a radial basis function (RBF) kernel with cost parameter C = 1000 and gamma = 0.001 achieved a model accuracy of 90.00% with a MAPE of 5.28, while a linear kernel with C = 10 achieved an accuracy of 87.68% with a MAPE of 6.10. It can be concluded that, with parameter tuning, the best accuracy and MAPE are obtained with the RBF kernel using C = 1000 and gamma = 0.001. The results show that the Support Vector Regression method is well suited to predicting the Polkadot price.
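A minimal scikit-learn sketch using the kernel settings reported above (RBF, C = 1000, gamma = 0.001); the data and the windowed problem framing are assumptions, since the paper's exact pipeline is not reproduced here:

    # Sketch with the paper's reported hyperparameters; data source is hypothetical.
    import numpy as np
    from sklearn.svm import SVR
    from sklearn.preprocessing import StandardScaler
    from sklearn.metrics import mean_absolute_percentage_error

    # Hypothetical daily closing prices; in practice, load Polkadot (DOT) history.
    prices = np.cumsum(np.random.default_rng(2).normal(0, 0.5, 400)) + 30.0

    # Supervised framing: predict tomorrow's close from a window of past closes.
    window = 7
    X = np.array([prices[i:i + window] for i in range(len(prices) - window)])
    y = prices[window:]
    split = int(0.8 * len(X))

    scaler = StandardScaler().fit(X[:split])
    model = SVR(kernel="rbf", C=1000, gamma=0.001)
    model.fit(scaler.transform(X[:split]), y[:split])

    pred = model.predict(scaler.transform(X[split:]))
    mape = mean_absolute_percentage_error(y[split:], pred) * 100  # as a percentage
    print(f"MAPE: {mape:.2f}%")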
A Multi-Agent K-Means Algorithm for Improved Parallel Data Clustering Mohammed Ahmed Jubair; Salama A. Mostafa; Aida Mustapha; Zirawani Baharum; Mohamad Aizi Salamat; Aldo Erianda
JOIV : International Journal on Informatics Visualization Vol 6, No 1-2 (2022): Data Visualization, Modeling, and Representation
Publisher : Politeknik Negeri Padang

DOI: 10.30630/joiv.6.1-2.934

Abstract

Due to the rapid increase in data volumes, clustering algorithms now find applications in a variety of fields. However, existing clustering techniques struggle to manage large data volumes because of accuracy issues and high computational cost. This work therefore offers a parallel clustering technique that combines K-means with a Multi-Agent System (MAS); the proposed technique is called Multi-K-means (MK-means). The main goal is to keep the dataset intact while boosting the accuracy of the clustering procedure. The cluster centers of each partition are calculated, combined, and then clustered again. The statistical significance of the method's performance was confirmed on five datasets, which served to test and assess the algorithm's efficacy. In terms of performance, the proposed MK-means algorithm is compared with the Clustering-based Genetic Algorithm (CGA), the Adaptive Biogeography Clustering-based Genetic Algorithm (ABCGA), and the standard K-means algorithm. The results show that MK-means outperforms the other algorithms because it activates agents separately for the clustering processes, with each agent considering a separate group of features.
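A single-process sketch of the partition-then-recluster idea: each "agent" clusters its own partition, the local centers are combined, and the combined centers are clustered again. Agent concurrency and the per-agent feature grouping are omitted, so this is a simplification, not the authors' MAS implementation:

    # Sequential sketch of MK-means; real MAS agents would run partitions in parallel.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(3)
    data = rng.random((3000, 5))
    k, n_agents = 4, 6

    # Each "agent" clusters its own partition and reports its cluster centers.
    partitions = np.array_split(data, n_agents)
    local_centers = np.vstack([
        KMeans(n_clusters=k, n_init=10, random_state=3).fit(p).cluster_centers_
        for p in partitions
    ])

    # Combine: cluster the collected centers to obtain the global centers.
    global_km = KMeans(n_clusters=k, n_init=10, random_state=3).fit(local_centers)
    labels = global_km.predict(data)  # final assignment for the intact dataset
    print(global_km.cluster_centers_)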
Lightweight Generative Adversarial Network Fundus Image Synthesis Nurhakimah Abd Aziz; Mohd Azman Hanif Sulaiman; Azlee Zabidi; Ihsan Mohd Yassin; Megat Syahirul Amin Megat Ali; Zairi Ismael Rizman
JOIV : International Journal on Informatics Visualization Vol 6, No 1-2 (2022): Data Visualization, Modeling, and Representation
Publisher : Politeknik Negeri Padang

DOI: 10.30630/joiv.6.1-2.924

Abstract

Blindness is a global health problem that affects billions of lives. Recent advances in Artificial Intelligence (AI), particularly Deep Learning (DL), have the potential to address the blindness issue, notably as accurate, non-invasive techniques for early detection and treatment of Diabetic Retinopathy (DR). DL-based techniques rely on extensive examples to be robust and accurate in capturing the features that represent the data, but the number of samples required for a DL classifier to learn properly is tremendous, which makes collecting and categorizing so many samples an issue. Therefore, this paper presents a lightweight Generative Adversarial Network (GAN) that synthesizes fundus samples for training AI-based systems. The GAN was trained on samples collected from publicly available datasets and follows the structure of the recent Lightweight GAN (LGAN) architecture. The implementation and results of the LGAN training and image generation are described. The trained network generated realistic, high-resolution samples of normal and diseased fundus images: the generated results realistically reproduce key structures and their placements, such as the optic disc, blood vessels, and exudates. Successful and unsuccessful samples were sorted manually, yielding 56.66% realistic results relative to the total generated. Rejected samples appear to result from inconsistencies in shape, key structures, placement, and color.
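The paper uses the Lightweight GAN architecture; the sketch below shows only the sampling pattern, with a toy PyTorch generator standing in for the trained LGAN (LGAN-specific details such as skip-layer excitation are omitted):

    # Toy stand-in generator; NOT the LGAN architecture, just the sampling pattern.
    import torch
    import torch.nn as nn

    class TinyGenerator(nn.Module):
        def __init__(self, z_dim=128):
            super().__init__()
            self.net = nn.Sequential(
                nn.ConvTranspose2d(z_dim, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(),
                nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(),
                nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(),
                nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh(),  # 32x32 RGB output
            )

        def forward(self, z):
            return self.net(z.view(z.size(0), -1, 1, 1))

    gen = TinyGenerator()
    # In the paper's setting, trained LGAN weights would be loaded here.
    with torch.no_grad():
        fakes = gen(torch.randn(8, 128))  # a batch of synthetic "fundus" images
    print(fakes.shape)                    # torch.Size([8, 3, 32, 32])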
Data-Centric Learning Method for Synthetic Data Augmentation and Object Detection Yeseong Park; Hyeongbok Kim; Yoon Jung Park; Changsin Lee; Jinsuk Lee
JOIV : International Journal on Informatics Visualization Vol 6, No 1-2 (2022): Data Visualization, Modeling, and Representation
Publisher : Politeknik Negeri Padang

DOI: 10.30630/joiv.6.1-2.939

Abstract

This paper proposes a deep learning framework for reducing large-scale domain shift in object detection using domain adaptation techniques. We approach this as data-centric domain adaptation with Image-to-Image translation models, a methodology that restyles source data into the target domain's style to reduce domain shift. However, existing Image-to-Image models focus on style translation and cannot be applied directly to the domain adaptation task. We solved this problem with a data-centric approach, simply by reordering the training sequence of the domain adaptation model. We defined the features as content and style, hypothesized that object-specific information in images is tied more closely to content than to style, and experimented with methods that preserve content information before style is learned; the model was trained in separate stages, altering only the training data. Our experiments confirmed that the proposed method improves the performance of the domain adaptation model and increases the effectiveness of the generated synthetic data for training object detection models. We compared our approach with the existing single-stage method, in which content and style are trained simultaneously, and argue that our method is more practical for training object detection models. The emphasis in this study is on preserving image content while changing image style. In the future, we plan additional experiments applying synthetic data generation to other application areas, such as indoor scenes and bin picking.
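A schematic PyTorch sketch of the "content first, then style" training reordering: phase 1 fits a reconstruction (content) objective, and phase 2 adds a style objective that matches target-domain feature statistics. Every component here (model, losses, data) is a toy stand-in, not the authors' Image-to-Image setup:

    # Toy two-phase schedule illustrating "content first, then style".
    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(16, 3, 3, padding=1))
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    source = torch.rand(16, 3, 32, 32)  # source-domain batch (toy data)
    target = torch.rand(16, 3, 32, 32)  # target-domain batch (toy data)

    def style_stats(x):
        # Per-channel mean/std as a crude proxy for domain "style".
        return x.mean(dim=(0, 2, 3)), x.std(dim=(0, 2, 3))

    # Phase 1: content only -- reconstruct the source so object info is preserved.
    for _ in range(100):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(source), source)
        loss.backward()
        opt.step()

    # Phase 2: add style -- pull output statistics toward the target domain while
    # keeping a weaker content term so structure is not destroyed.
    t_mean, t_std = style_stats(target)
    for _ in range(100):
        opt.zero_grad()
        out = model(source)
        o_mean, o_std = style_stats(out)
        style_loss = nn.functional.mse_loss(o_mean, t_mean) + nn.functional.mse_loss(o_std, t_std)
        content_loss = nn.functional.mse_loss(out, source)
        (style_loss + 0.1 * content_loss).backward()
        opt.step()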

Page 1 of 2 | Total Records: 20